fix: improve fault tolerance of core requests and Provider compatibility #5610
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request mainly addresses two key issues: one is improved compatibility with non-standard OpenAI API responses, in particular handling the possibly missing `tool_call.type` field.
Hey - I've found 1 issue, and left some high-level feedback:
- In the `_query_stream` error handling, the entire `chunk` object is now logged at warning level with `exc_info=True`; consider truncating/structuring this data or lowering it to debug level, to avoid oversized logs and potential leakage of sensitive content.
- The new empty-response fallback condition in `_iter_llm_responses_with_fallback` only checks `resp.completion_text` and `resp.tools_call_args`; if other non-textual outputs (e.g. images, function call IDs, or metadata) can also represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the `_query_stream` error handler you now log the entire `chunk` object at warning level with `exc_info=True`; consider either truncating/structuring this data or lowering it to debug to avoid excessively large logs and potential leakage of sensitive content.
- The new empty-response fallback condition in `_iter_llm_responses_with_fallback` only checks `resp.completion_text` and `resp.tools_call_args`; if other non-textual outputs (e.g., images, function call ids, or metadata) can represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="318-324" />
<code_context>
+ if tool_call.type is None:
+ tool_call.type = "function"
+
try:
state.handle_chunk(chunk)
except Exception as e:
- logger.warning("Saving chunk state error: " + str(e))
+ logger.warning(
+ f"Saving chunk state error: {type(e).__name__}: {e}. Chunk data: {chunk}",
+ exc_info=True,
+ )
</code_context>
<issue_to_address>
**🚨 suggestion (security):** Logging full chunk objects on errors may be heavy and potentially sensitive; consider truncating or summarizing.
Logging the full `chunk` can inflate log volume and may include user content you don’t want stored verbatim. Consider logging a truncated/summary view (e.g., `repr(chunk)[:N]` or just key identifiers) or moving the full payload behind a debug-level log, while keeping the exception type/message at warning level.
```suggestion
try:
state.handle_chunk(chunk)
except Exception as e:
# Log a truncated summary of the chunk at warning level to avoid
# excessive log volume and potential sensitive data exposure.
max_chunk_log_len = 500
chunk_repr = repr(chunk)
if len(chunk_repr) > max_chunk_log_len:
chunk_repr = chunk_repr[:max_chunk_log_len] + "... [truncated]"
logger.warning(
"Saving chunk state error: %s: %s. Chunk summary (repr, max %d chars): %s",
type(e).__name__,
e,
max_chunk_log_len,
chunk_repr,
exc_info=True,
)
# Keep full payload only at debug level for deeper troubleshooting.
logger.debug("Full chunk data for error: %r", chunk, exc_info=True)
```
</issue_to_address>
Code Review
This Pull Request introduces compatibility fixes for non-standard OpenAI adapter responses in openai_source.py, ensuring tool_call.type is not empty in streaming responses, and improves error logging. It also adds a fallback mechanism in tool_loop_agent_runner.py for non-streaming requests with empty upstream responses, enhancing system robustness and debuggability. A potential security vulnerability was identified in openai_source.py where logging raw LLM response chunks could lead to the leakage of sensitive information or PII. It is recommended to sanitize or remove this raw data from log messages. Additionally, there is an opportunity to improve code readability by reducing nesting, which has been detailed in a specific review comment.
Hey - 我在这里给了一些整体性的反馈:
- 在
openai_source._query_stream中规范化tool_call.type时,建议仅当 SDK 对象实际包含function负载(例如tool_call.function is not None)时,才强制将其设为'function',以避免未来的非function工具类型或其他扩展被无意间覆盖。 _is_empty_llm_response启发式当前忽略了resp.role;如果提供方未来返回那种文本为空但role != 'assistant'的、类似错误的响应,你可能需要对这类情况做特殊处理,避免将其误判为“空的助手回复”而触发回退逻辑。
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- When normalizing `tool_call.type` in `openai_source._query_stream`, consider only forcing it to `'function'` if the SDK object actually has a `function` payload (e.g. `tool_call.function is not None`) so that future non-`function` tool types or other extensions are not inadvertently masked.
- The `_is_empty_llm_response` heuristic currently ignores `resp.role`; if the provider ever returns error-like responses with empty text but `role != 'assistant'`, you may want to special-case those to avoid incorrectly treating them as empty assistant replies and triggering fallback.
`_is_empty_llm_response` now applies the empty-reply check only when the role is `assistant` or `tool`
@sourcery-ai review
Hey - I've left some high-level feedback:
- Consider reusing `_is_empty_llm_response` in the `step()` empty-assistant check instead of duplicating the empty-response heuristic, so behavior stays consistent if the heuristic is updated later.
- When logging the empty-response fallback in `_iter_llm_responses_with_fallback`, you might include model/provider identifiers (similar to the new logging in `openai_source.py`) to make it easier to trace which upstream returned the empty reply.
- The debug log for each chunk with a missing `tool_call.type` in `_query_stream` could become noisy for long streams; consider logging only once per stream, or at a lower verbosity level after the first occurrence.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider reusing `_is_empty_llm_response` in the `step()` empty-assistant check instead of duplicating the empty-response heuristic so behavior stays consistent if the heuristic is updated later.
- When logging the empty-response fallback in `_iter_llm_responses_with_fallback`, you might include model/provider identifiers (similar to the new logging in `openai_source.py`) to make it easier to trace which upstream returned the empty reply.
- The debug log for each chunk with missing `tool_call.type` in `_query_stream` could become noisy for long streams; consider logging only once per stream or at a lower verbosity level after the first occurrence.
SourceryAI
left a comment
Hey - I've found 2 issues, and left some high level feedback:
- In `_is_empty_llm_response`, all non-`Comp.Plain` components are treated as meaningful content; if there are metadata-only component types in `result_chain`, consider whitelisting which component types count as meaningful to avoid suppressing fallback on structurally empty but non-Plain chunks.
- In `OpenAISource._query_stream`, the hard-coded string `"function"` for `tool_call.type` could be replaced with a shared constant or SDK enum (if available) to avoid drift if the underlying type values change and to make the intent clearer.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_is_empty_llm_response`, all non-`Comp.Plain` components are treated as meaningful content; if there are metadata-only component types in `result_chain`, consider whitelisting which component types count as meaningful to avoid suppressing fallback on structurally empty but non-plain chunks.
- In `OpenAISource._query_stream`, the hard-coded string `"function"` for `tool_call.type` could be replaced with a shared constant or SDK enum (if available) to avoid drift if the underlying type values change and to make the intent clearer.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="332-338" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # For streaming chunks, don't check emptiness immediately; a single chunk may be just metadata or a heartbeat
</code_context>
<issue_to_address>
**suggestion:** Use logger.exception (or include exc_info) to preserve the traceback when chunk handling fails.
Only the exception message is currently logged, which makes diagnosing protocol/stream issues harder. Please log with traceback, e.g.:
```python
try:
state.handle_chunk(chunk)
except Exception:
logger.exception(
"[%s] Saving chunk state error (provider: %s)",
self.get_model(),
self.provider_config.get("id", "unknown"),
)
```
This keeps the model/provider context while exposing the full stack trace.
```suggestion
try:
state.handle_chunk(chunk)
except Exception:
logger.exception(
"[%s] Saving chunk state error (provider: %s)",
self.get_model(),
self.provider_config.get("id", "unknown"),
)
```
</issue_to_address>
### Comment 2
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="224" />
<code_context>
else:
yield await self.provider.text_chat(**payload)
+ def _is_empty_llm_response(self, resp: LLMResponse) -> bool:
+ """Check if an LLM response is effectively empty.
+
</code_context>
<issue_to_address>
**issue (complexity):** Consider decomposing the new empty-response handling into smaller helpers and named booleans to keep the behavior while simplifying control flow and readability.
You can keep the new behavior but reduce control-flow complexity by splitting responsibilities and using more declarative helpers.
**1. Decompose `_is_empty_llm_response` and simplify the loop**
Instead of manual flag mutation and a mixed-responsibility helper, you can factor it into smaller helpers and use `any()`:
```python
def _has_text_content(self, resp: LLMResponse) -> bool:
return bool((resp.completion_text or "").strip() or (resp.reasoning_content or "").strip())
def _has_result_chain_content(self, resp: LLMResponse) -> bool:
if not resp.result_chain or not resp.result_chain.chain:
return False
return any(
(
isinstance(comp, Comp.Plain) and bool(comp.text and comp.text.strip())
) or (
not isinstance(comp, Comp.Plain) # non-Plain components are valid content
)
for comp in resp.result_chain.chain
)
def _is_empty_llm_response(self, resp: LLMResponse) -> bool:
return (
not self._has_text_content(resp)
and not resp.tools_call_args
and not self._has_result_chain_content(resp)
)
```
This keeps the semantics but flattens branching and makes each rule independently testable.
**2. Name the boolean parts of the fallback condition**
The compound `if` in `_iter_llm_responses_with_fallback` becomes easier to scan if intermediate booleans are named:
```python
is_assistant_or_tool = resp.role in ("assistant", "tool")
is_empty = self._is_empty_llm_response(resp)
should_try_fallback = is_assistant_or_tool and is_empty and not is_last_candidate
if should_try_fallback:
logger.warning(
"Chat Model %s returns empty response, trying fallback to next provider.",
candidate_id,
)
break
```
This preserves the logic but improves readability.
**3. Compact error context construction**
The error construction in `step` can be tightened while keeping the same context in the message:
```python
base_msg = "LLM returned empty assistant message with no tool calls."
ctx = {
"model_id": getattr(self.run_context, "model_id", None),
"provider_id": getattr(self.run_context, "provider_id", None),
"run_id": getattr(self.run_context, "run_id", None),
}
context_str = ", ".join(f"{k}={v}" for k, v in ctx.items() if v is not None)
if context_str:
base_msg = f"{base_msg} Context: {context_str}."
raise LLMEmptyResponseError(base_msg)
```
If you expect to reuse this pattern elsewhere, you could move it into a small `_format_empty_response_error()` helper, but even this inline version makes the branch visually lighter.
</issue_to_address>
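As a rough illustration of the fallback behavior discussed in the reviews above, the retry loop in `_iter_llm_responses_with_fallback` can be reduced to the following sketch. The names `candidates` and `is_empty` are illustrative stand-ins, not the project's actual API.

```python
# Sketch: try each provider in order; accept the first non-empty reply, and
# only return an empty reply if the last candidate also produced one.
def first_non_empty_reply(candidates, is_empty):
    resp = None
    for idx, call_provider in enumerate(candidates):
        is_last = idx == len(candidates) - 1
        resp = call_provider()
        if not is_empty(resp) or is_last:
            return resp
        # Empty reply and more candidates remain: fall back to the next one.
    return resp
```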
@Soulter Please review: should the code be modified according to the AI suggestions? That could make the diff very large.
Strengthen `_ensure_img_caption` fault tolerance to avoid framework-wide crashes when image parsing fails
This PR has become too large, so the changes will be resubmitted later as smaller commits on a clean branch. Closing for now.
fix: handle non-standard returns from the OpenAI adapter
fix: include empty upstream replies in fallback retries
fix: add a local image pre-compression mechanism to avoid 413 errors caused by oversized originals
fix: strengthen `_ensure_img_caption` fault tolerance to avoid framework-wide crashes when image parsing fails
Modifications / 改动点
fix: handle non-standard returns from the OpenAI adapter:

Modified file: `astrbot/core/provider/sources/openai_source.py`. In the `async for chunk in stream` loop of the `_query_stream` method, added handling for non-standard responses, filling in the possibly missing `tool_call.type` field.

fix: include empty upstream replies in fallback retries:
Modified file: `astrbot/core/agent/runners/tool_loop_agent_runner.py`. Added `_is_empty_llm_response`; in the `async for resp in self._iter_llm_responses(include_model=idx == 0)` loop of the `_iter_llm_responses_with_fallback` method, `_is_empty_llm_response` now decides whether a reply is valid and triggers the fallback mechanism when it is not.

Modified file: `astrbot/core/exceptions.py`.

fix: add a local image pre-compression mechanism to avoid 413 errors caused by oversized originals:
Modified file: `astrbot/core/astr_main_agent.py`. Added a `_compress_image_internal` function that compresses incoming local images and returns remote image URLs unchanged; the new compression function is invoked from `_ensure_img_caption` and `_process_quote_message`.

fix: strengthen `_ensure_img_caption` fault tolerance to avoid framework-wide crashes when image parsing fails:
Modified the `_ensure_img_caption` function: added a `finally` block that clears `image_urls`, avoiding a framework-wide crash when image processing fails.

Screenshots or Test Results / 运行截图或测试结果
With streaming enabled, non-standard returns are now parsed correctly (upstream: Gemini via newapi).

Error shown for an empty reply:

Fallback log for an empty reply:

Local large-image compression:

Fault-tolerance improvement:

Checklist / 检查清单
I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.
Summary by Sourcery

Handle non-standard OpenAI streaming responses and add fallback behavior for empty upstream replies.

Bug Fixes:
- Normalize non-standard OpenAI streaming chunks by filling in the missing `tool_call.type` field to maintain compatibility.

Enhancements:
- Include empty upstream replies in the provider fallback retries.
由 Sourcery 提供的摘要
改进 OpenAI 流式处理和回退检测在 LLM 响应中的健壮性与诊断能力。
错误修复(Bug Fixes):
tool_call.type字段来规范化非标准的 OpenAI 流式数据块,以保持兼容性。功能增强(Enhancements):
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化流式工具调用载荷,以保持与非标准 OpenAI 适配器的兼容性。增强项:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化 OpenAI 流式数据块,从而保持与非标准适配器的兼容性。增强功能:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化流式工具调用载荷,以保持与非标准 OpenAI 适配器的兼容性。增强项:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Summary by Sourcery
Improve robustness of LLM response handling and OpenAI streaming integration, including better fallback behavior for empty upstream replies.
Bug Fixes:
- Normalize malformed OpenAI streaming `tool_call` payloads by populating the missing `type` field, avoiding assertion errors during chunk processing.
Enhancements:
- Introduce `LLMEmptyResponseError`, raised with contextual metadata once all fallback strategies are exhausted and the assistant response is still empty, to make such failures easier to debug.
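The empty-response fallback described above can be sketched as follows. This is a simplified model under stated assumptions: the provider interface, the dict-shaped response with `completion_text` / `tools_call_args` keys (echoing the fields named in the review), and the error's constructor signature are all illustrative, not the repository's actual API.

```python
# Hypothetical sketch of the fallback flow: try each provider in turn and
# raise a dedicated error carrying context once every fallback has yielded
# an empty assistant reply. Names and signatures are illustrative.

class LLMEmptyResponseError(RuntimeError):
    """Raised when all fallback providers returned an empty response."""

    def __init__(self, provider: str, attempts: int):
        super().__init__(
            f"provider {provider!r} returned an empty response "
            f"after {attempts} fallback attempt(s)"
        )
        self.provider = provider
        self.attempts = attempts


def is_empty(resp: dict) -> bool:
    # A reply counts as empty when it carries neither text nor tool-call args.
    return not resp.get("completion_text") and not resp.get("tools_call_args")


def query_with_fallback(providers, prompt: str) -> dict:
    """providers: iterable of (name, callable) pairs, tried in order."""
    last_name = "unknown"
    attempts = 0
    for name, query in providers:
        attempts += 1
        last_name = name
        resp = query(prompt)
        if not is_empty(resp):
            return resp  # first non-empty reply wins
    # Every provider produced an empty reply: surface it with context.
    raise LLMEmptyResponseError(last_name, attempts)
```

One design note the review touches on: `is_empty` only inspects text and tool-call output, so providers that return other valid payloads (images, metadata-only replies) would need those checks extended to avoid a spurious fallback.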